233 research outputs found

    Combating catastrophic forgetting with developmental compression

    Full text link
    Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the sequence, or tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have sought to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Here we present a method for addressing catastrophic forgetting called developmental compression. It exploits the mild impacts of developmental mutations to lessen adverse changes to previously-evolved capabilities and 'compresses' specialized neural networks into a generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation and suggesting better scalability than existing approaches. We validate this method on a robot control problem and hope to extend this approach to other machine learning domains in the future.
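As a rough sketch of the idea only (the paper evolves neural robot controllers; everything below, including the loss, vector sizes, and mutation scale, is invented for illustration), compressing two specialists into one generalist under mild mutations can be mimicked with a simple hill climber:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 'specialized' controllers, stood in for by weight vectors
# (hypothetical numbers; the paper uses evolved neural networks).
w_task_a = rng.normal(size=8)
w_task_b = w_task_a + 0.1 * rng.normal(size=8)   # partially overlapping tasks

def combined_loss(w):
    # A generalist is good when it is close to both specialists at once.
    return np.sum((w - w_task_a) ** 2) + np.sum((w - w_task_b) ** 2)

# 'Compress' the specialists into one generalist using only mild
# mutations, echoing the point that small developmental-style changes
# rarely destroy previously acquired competence.
w = rng.normal(size=8)
for _ in range(2000):
    child = w + 0.02 * rng.normal(size=8)        # mild, development-like step
    if combined_loss(child) <= combined_loss(w):
        w = child
```

Because each mutation is small, an accepted child retains most of the parent's competence on both tasks, which is the property the paper's developmental mutations are meant to provide.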

    Investigating the complementary value of discrete choice experiments for the evaluation of barriers and facilitators in implementation research: a questionnaire survey

    Get PDF
    Background: The potential barriers and facilitators to change should guide the choice of implementation strategy. Implementation researchers believe that existing methods for the evaluation of potential barriers and facilitators are not satisfactory. Discrete choice experiments (DCEs) are relatively new in the health care sector as a way to investigate preferences, and may be of value in the field of implementation research. The objective of our study was to investigate the complementary value of DCEs for the evaluation of barriers and facilitators in implementation research.

    Methods: The clinical subject was the implementation of the guideline for breast cancer surgery in day care. We identified 17 potential barriers and facilitators to the implementation of this guideline. We used a traditional questionnaire made up of statements about the potential barriers and facilitators; respondents answered 17 statements on a five-point scale ranging from one (fully disagree) to five (fully agree). The potential barriers and facilitators were included in the DCE as decision attributes. Data were gathered among anaesthesiologists, surgical oncologists, and breast care nurses by means of a paper-and-pencil questionnaire.

    Results: The overall response was 10%. The most striking finding was that the responses to the traditional questionnaire hardly differentiated between barriers. Forty-seven percent of the respondents considered DCE an inappropriate method, finding it too difficult and too time-consuming. Unlike the traditional questionnaire, the results of a DCE provide implementation researchers and clinicians with a relative attribute-importance ranking that can be used to prioritize potential barriers and facilitators to change, and hence to better fine-tune implementation strategies to the specific problems and challenges of a particular implementation process.

    Conclusion: The results of our DCE and traditional questionnaire would probably lead to different implementation strategies. Although there is no 'gold standard' for prioritizing potential barriers and facilitators to the implementation of change, DCE would theoretically be the method of choice. However, the feasibility of using DCE was less favourable. Further empirical applications should investigate whether DCE can make a valuable contribution to implementation science.
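The relative attribute-importance ranking a DCE yields can be sketched with a toy conditional-logit analysis. The three attribute names, the weights, and the sample size below are all invented for illustration, not the study's 17 real barriers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical barrier attributes and latent importances.
attrs = ["staff time", "patient preference", "reimbursement"]
true_w = np.array([2.0, 0.2, 1.0])

# Simulate paired-choice tasks: each row holds the attribute-level
# differences between option A and option B; the respondent picks A
# with logit probability of the weighted difference.
X = rng.choice([-1.0, 0.0, 1.0], size=(800, 3))
y = (rng.random(800) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

# Fit a conditional-logit-style model by gradient ascent on the
# average log-likelihood, then rank attributes by estimated weight.
w = np.zeros(3)
for _ in range(1000):
    w += X.T @ (y - 1.0 / (1.0 + np.exp(-X @ w))) / len(y)

ranking = [attrs[i] for i in np.argsort(-w)]
print(ranking)  # relative importance, most important first
```

The estimated weights order the attributes, which is exactly the prioritization a five-point agreement scale tends to flatten when most respondents cluster at the same answer.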

    iSAM2 : incremental smoothing and mapping using the Bayes tree

    Get PDF
    Author Posting. © The Author(s), 2011. This is the author's version of the work. It is posted here by permission of Sage for personal use, not for redistribution. The definitive version was published in International Journal of Robotics Research 31 (2012): 216-235, doi:10.1177/0278364911430419.

    We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insights provided by our new data structure. First, the Bayes tree provides a better understanding of the matrix factorization in terms of probability densities. Second, we show how the fairly abstract updates to a matrix factorization translate to a simple editing of the Bayes tree and its conditional densities. Third, we apply the Bayes tree to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps. We analyze various properties of iSAM2 in detail, and show on a range of real and simulated datasets that our algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.

    M. Kaess, H. Johannsson, and J. Leonard were partially supported by ONR grants N00014-06-1-0043 and N00014-10-1-0936. F. Dellaert and R. Roberts were partially supported by NSF, award number 0713162, "RI: Inference in Large-Scale Graphical Models". V. Ila was partially supported by the Spanish MICINN under the Programa Nacional de Movilidad de Recursos Humanos de Investigación.
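The batch baseline that iSAM2 improves on can be illustrated with a toy 1D pose chain: each new factor adds a sparse contribution to the information system, which is then re-solved. iSAM2 itself avoids this periodic re-solve by editing the Bayes tree; the sketch below shows only the underlying least-squares setup, with made-up measurement values:

```python
import numpy as np

# Toy 1D pose chain: a prior x_0 = 0 plus odometry factors
# x_{i+1} - x_i = 1. Each factor with Jacobian J and measurement b
# contributes J^T J to the (sparse, tridiagonal) information matrix
# Lambda and J^T b to the information vector eta.
n = 5
Lambda = np.zeros((n, n))
eta = np.zeros(n)

Lambda[0, 0] += 1.0                     # prior factor on x_0

for i in range(n - 1):
    J = np.zeros(n)
    J[i], J[i + 1] = -1.0, 1.0          # odometry factor x_{i+1} - x_i = 1
    Lambda += np.outer(J, J)
    eta += J * 1.0

x = np.linalg.solve(Lambda, eta)        # batch solve of Lambda x = eta
print(x)  # -> approximately [0, 1, 2, 3, 4]
```

In a real SLAM problem this solve corresponds to a sparse matrix factorization; the Bayes tree lets iSAM2 update that factorization incrementally instead of recomputing it as the loop above grows.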

    Evolving Synaptic Plasticity with an Evolutionary Cellular Development Model

    Get PDF
    Since synaptic plasticity is regarded as a potential mechanism for memory formation and learning, there is growing interest in the study of its underlying mechanisms. Recently several evolutionary models of cellular development have been presented, but none have been shown to be able to evolve a range of biological synaptic plasticity regimes. In this paper we present a biologically plausible evolutionary cellular development model and test its ability to evolve different biological synaptic plasticity regimes. The core of the model is a genomic and proteomic regulation network which controls cells and their neurites in a 2D environment. The model has previously been shown to successfully evolve behaving organisms, enable gene-related phenomena, and produce biological neural mechanisms such as temporal representations. Several experiments are described in which the model evolves different synaptic plasticity regimes using a direct fitness function. Other experiments examine the ability of the model to evolve simple plasticity regimes in a task-based fitness function environment. These results suggest that such evolutionary cellular development models have the potential to be used as a research tool for investigating the evolutionary aspects of synaptic plasticity and at the same time can serve as the basis for novel artificial computational systems.
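A minimal stand-in for evolving a plasticity regime under a direct fitness function is sketched below. It uses a generic four-parameter Hebbian-family rule, not the paper's genomic and proteomic cellular model, and all spike trains, targets, and scales are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed spike trains: one postsynaptic train perfectly correlated with
# the presynaptic one, one independent of it.
pre = (rng.random(100) < 0.5).astype(float)
post_corr = pre.copy()
post_unc = (rng.random(100) < 0.5).astype(float)

def run(params, post):
    # Generic parameterized rule: dw = eta * (A*pre*post + B*pre + C*post + D).
    A, B, C, D = params
    w = 0.5
    for p, q in zip(pre, post):
        w += 0.05 * (A * p * q + B * p + C * q + D)
    return w

def fitness(params):
    # Target regime: potentiate under correlated activity, stay put otherwise
    # (lower is better).
    return (run(params, post_corr) - 1.0) ** 2 + (run(params, post_unc) - 0.5) ** 2

# Evolve the four rule parameters by simple hill climbing on the direct
# fitness, loosely mirroring the direct-fitness experiments in spirit only.
params = np.zeros(4)
for _ in range(400):
    child = params + 0.1 * rng.normal(size=4)
    if fitness(child) <= fitness(params):
        params = child
```

Different target regimes (e.g. depression under correlated firing) can be evolved by changing only the fitness targets, which is the sense in which a single parameterized rule family can cover several plasticity regimes.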

    Employee job satisfaction as a factor in improving the effectiveness of the Military Faculty of BSUIR

    Get PDF
    Accurate navigation is a fundamental requirement for robotic systems—marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely low drift rates, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest which can be updated and optimized in real time on board the AUV.
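The contrast between pure dead reckoning and feature-aided navigation can be sketched in one dimension. The noise levels, correction gain, and landmark position below are hypothetical, chosen only to make the drift behavior visible:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D toy: dead reckoning alone drifts without bound, while fusing
# occasional range observations of a known feature keeps error bounded.
landmark = 50.0
true_x = dr_x = fused_x = 0.0
dr_err, fused_err = [], []

for t in range(200):
    noise = 0.2 * rng.normal()
    true_x += 1.0                    # true motion: 1 unit per step
    dr_x += 1.0 + noise              # odometry only: error accumulates
    fused_x += 1.0 + noise
    if t % 10 == 0:
        # Range to the mapped feature gives an absolute position fix.
        r = landmark - true_x + 0.5 * rng.normal()
        fused_x += 0.8 * ((landmark - r) - fused_x)   # fixed-gain correction
    dr_err.append(abs(dr_x - true_x))
    fused_err.append(abs(fused_x - true_x))
```

The dead-reckoning error grows like a random walk with distance traveled, while the fused estimate is repeatedly pulled back by the feature observations, which is the qualitative behavior the chapter's applications exploit.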

    MC EMiNEM Maps the Interaction Landscape of the Mediator

    Get PDF
    The Mediator is a highly conserved, large multiprotein complex that is essentially involved in the regulation of eukaryotic mRNA transcription. It acts as a general transcription factor by integrating regulatory signals from gene-specific activators or repressors to RNA polymerase II. The internal network of interactions between Mediator subunits that conveys these signals is largely unknown. Here, we introduce MC EMiNEM, a novel method for the retrieval of functional dependencies between proteins that have pleiotropic effects on mRNA transcription. MC EMiNEM is based on Nested Effects Models (NEMs), a class of probabilistic graphical models that extends the idea of hierarchical clustering. It combines mode-hopping Monte Carlo (MC) sampling with an Expectation-Maximization (EM) algorithm for NEMs to increase sensitivity compared to existing methods. A meta-analysis of four Mediator perturbation studies in Saccharomyces cerevisiae, three of which are unpublished, provides new insight into the Mediator signaling network. In addition to the known modular organization of the Mediator subunits, MC EMiNEM reveals a hierarchical ordering of its internal information flow, which is putatively transmitted through structural changes within the complex. We identify the N-terminus of Med7 as a peripheral entity, entailing only local structural changes upon perturbation, while the C-terminus of Med7 and Med19 appear to play a central role. MC EMiNEM associates Mediator subunits to most directly affected genes, which, in conjunction with gene set enrichment analysis, allows us to construct an interaction map of Mediator subunits and transcription factors.
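The Monte Carlo ingredient of such a structure search, local moves plus occasional mode hops over candidate hierarchies, can be sketched on a toy three-gene nested-effects problem. This omits the EM step and all of MC EMiNEM's actual machinery; the graph, noise level, and likelihood are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy nested-effects setting: 3 perturbed genes with true signal
# hierarchy A -> B -> C; perturbing a gene affects it and everything
# downstream of it.
true_adj = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 0]])

def reach(adj):
    # Reachability (including self): which genes each perturbation hits.
    r = np.eye(3, dtype=int) | adj
    for _ in range(2):
        r = ((r @ r) > 0).astype(int) | r
    return r

# Observed effects: true reachability with a little Bernoulli noise.
data = reach(true_adj) ^ (rng.random((3, 3)) < 0.02).astype(int)

def log_score(adj):
    # Each predicted effect matches the data with probability 0.9.
    match = (reach(adj) == data).sum()
    return match * np.log(0.9) + (9 - match) * np.log(0.1)

# Metropolis over edge sets: mostly single-edge flips, with occasional
# 'mode hops' that jump to a random graph to escape local optima.
adj = np.zeros((3, 3), dtype=int)
best, best_s = adj.copy(), log_score(adj)
for _ in range(500):
    if rng.random() < 0.05:                        # mode hop
        prop = (rng.random((3, 3)) < 0.5).astype(int)
    else:                                          # local move
        prop = adj.copy()
        i, j = rng.integers(3, size=2)
        prop[i, j] ^= 1
    np.fill_diagonal(prop, 0)
    s = log_score(prop)
    if np.log(rng.random()) < s - log_score(adj):  # Metropolis accept
        adj = prop
        if s > best_s:
            best, best_s = adj.copy(), s
```

The mode hops matter because the posterior over hierarchies is multimodal: a chain of single-edge flips between two good structures may have to pass through very poor intermediates that a purely local sampler rarely crosses.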

    Can we use atmospheric CO2 measurements to verify emission trends reported by cities? Lessons from a 6-year atmospheric inversion over Paris

    Get PDF
    Existing CO2 emissions reported by city inventories usually lag in real-time by a year or more and are prone to large uncertainties. This study responds to the growing need for timely and precise estimation of urban CO2 emissions to support present and future mitigation measures and policies. We focus on the Paris metropolitan area, the largest urban region in the European Union and the city with the densest atmospheric CO2 observation network in Europe. We performed long-term atmospheric inversions to quantify the citywide CO2 emissions, i.e., fossil fuel as well as biogenic sources and sinks, over 6 years (2016–2021) using a Bayesian inverse modeling system. Our inversion framework benefits from a novel near-real-time hourly fossil fuel CO2 emission inventory (Origins.earth) at 1 km spatial resolution. In addition to the mid-afternoon observations, we attempt to assimilate morning CO2 concentrations based on the ability of the Weather Research and Forecasting model with Chemistry (WRF-Chem) transport model to simulate atmospheric boundary layer dynamics constrained by observed layer heights. Our results show a long-term decreasing trend of around 2 % ± 0.6 % per year in annual CO2 emissions over the Paris region. The impact of the COVID-19 pandemic led to a 13 % ± 1 % reduction in annual fossil fuel CO2 emissions in 2020 with respect to 2019. Subsequently, annual emissions increased by 5.2 % ± 14.2 % from 32.6 ± 2.2 Mt CO2 in 2020 to 34.3 ± 2.3 Mt CO2 in 2021. Based on a combination of up-to-date inventories, high-resolution atmospheric modeling and high-precision observations, our current capacity can deliver near-real-time CO2 emission estimates at the city scale in less than a month, and the results agree within 10 % with independent estimates from multiple city-scale inventories.
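The Bayesian update at the heart of such an inversion can be written in a few lines. The sector count, footprint operator H, covariances, and all numbers below are hypothetical toy values, not the Paris system's:

```python
import numpy as np

# Minimal Bayesian inversion sketch (a generic toy, not WRF-Chem):
# x_post = x_prior + K (y - H x_prior),  K = B H^T (H B H^T + R)^-1.
x_prior = np.array([10.0, 5.0, 2.0])          # prior sector emissions (Mt CO2)
B = np.diag([4.0, 1.0, 0.25])                 # prior error covariance
H = np.array([[0.5, 0.2, 0.1],                # transport/footprint operator
              [0.3, 0.4, 0.0],
              [0.1, 0.1, 0.6],
              [0.4, 0.3, 0.2]])
R = 0.1 * np.eye(4)                           # observation error covariance

x_true = np.array([8.0, 5.5, 2.2])            # hypothetical 'truth'
y = H @ x_true                                # noise-free synthetic observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)
print(np.round(x_post, 2))
```

The posterior is pulled from the prior toward the emissions implied by the concentration observations, weighted by the two error covariances; with dense observations and small R, the correction dominates, which is why a dense urban network supports precise trend estimates.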